Preface
BNAIC is the annual Benelux Conference on Artificial Intelligence. In 2017, the 29th edition of BNAIC was organized by the Institute of Artificial Intelligence and Cognitive Engineering (ALICE), University of Groningen, under the auspices of the Benelux Association for Artificial Intelligence (BNVKI) and the Dutch Research School for Information and Knowledge Systems (SIKS). BNAIC 2017 took place in Het Kasteel, Groningen, The Netherlands, on November 8–9, 2017. The conference included invited speakers, research presentations, posters, demonstrations, a deep learning workshop (organized by our sponsor NVIDIA), and a research and business session. Some 160 participants attended the conference.
Continuous-action Reinforcement Learning for Playing Racing Games: Comparing SPG to PPO
In this paper, a novel racing environment for OpenAI Gym is introduced. This
environment operates with continuous action and state spaces and requires
agents to learn to control the acceleration and steering of a car while
navigating a randomly generated racetrack. Different versions of two
actor-critic learning algorithms are tested on this environment: Sampled Policy
Gradient (SPG) and Proximal Policy Optimization (PPO). An extension of SPG is
introduced that aims to improve learning performance by weighting action
samples during the policy update step. The effect of using experience replay
(ER) is also investigated. To this end, a modification to PPO is introduced
that allows for training using old action samples by optimizing the actor in
log space. Finally, a new technique for performing ER is tested that aims to
improve learning speed without sacrificing performance by splitting the
training into two parts, whereby networks are first trained using state
transitions from the replay buffer, and then using only recent experiences. The
results indicate that experience replay is not beneficial to PPO in continuous
action spaces. The training of SPG seems to be more stable when actions are
weighted. All versions of SPG outperform PPO when ER is used. The ER trick is
effective at improving training speed on a computationally less intensive
version of SPG.
Comment: 12 pages, 9 figures. Code is available at
https://github.com/mario-holubar/RacingR
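The two-phase replay schedule described in this abstract (train on transitions sampled from the full replay buffer first, then on recent experiences only) can be illustrated with a minimal sketch. All names and values here (the deque-based buffer, the update_networks placeholder, phase lengths, and window sizes) are illustrative assumptions, not the paper's actual implementation; that lives in the linked repository.

import random
from collections import deque

BUFFER_SIZE = 100_000
BATCH_SIZE = 64
N_REPLAY_STEPS = 5_000    # phase 1: sample from the whole buffer
N_RECENT_STEPS = 1_000    # phase 2: sample from recent transitions only
RECENT_WINDOW = 2_000     # how many of the newest transitions count as "recent"

replay_buffer = deque(maxlen=BUFFER_SIZE)  # holds (s, a, r, s') tuples

def update_networks(batch):
    # Placeholder for one actor-critic gradient step on a minibatch.
    pass

def train_two_phase():
    # Phase 1: replay over the full buffer for sample efficiency.
    for _ in range(N_REPLAY_STEPS):
        if len(replay_buffer) >= BATCH_SIZE:
            update_networks(random.sample(list(replay_buffer), BATCH_SIZE))
    # Phase 2: restrict sampling to the newest transitions, so the final
    # policy is refined on data close to the current behavior.
    recent = list(replay_buffer)[-RECENT_WINDOW:]
    for _ in range(N_RECENT_STEPS):
        if len(recent) >= BATCH_SIZE:
            update_networks(random.sample(recent, BATCH_SIZE))

The design intuition, per the abstract, is that the buffer-driven first phase buys sample efficiency while the recent-data second phase keeps the final updates close to on-policy behavior.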
Sampled Policy Gradient for Learning to Play the Game Agar.io
In this paper, a new offline actor-critic learning algorithm is introduced:
Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an
approximated policy gradient by using the critic to evaluate the samples. This
sampling allows SPG to search the action-Q-value space more globally than
deterministic policy gradient (DPG), in theory enabling it to escape more
local optima. SPG is compared to Q-learning and the actor-critic algorithms
CACLA and DPG in a pellet collection task and a self-play environment in the
game Agar.io. The online game Agar.io has become massively popular on the
internet due to intuitive game design and the ability to instantly compete
against players around the world. From the point of view of artificial
intelligence, this game is also very intriguing: it has continuous input
and action spaces and allows diverse agents with complex strategies to
compete against each other. The experimental results show that Q-learning and
CACLA outperform a pre-programmed greedy bot in the pellet collection task, but
all algorithms fail to outperform this bot in a fighting scenario. Analysis of
the SPG algorithm shows that it is highly extensible through offline
exploration, and that it matches the performance of DPG even in its basic
form, without extensive sampling.
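The core SPG actor update described in this abstract (sample candidate actions, score them with the critic, and move the actor toward the best-scoring sample) can be sketched as follows. The network architectures, the dimensions STATE_DIM and ACTION_DIM, the hyperparameters N_SAMPLES and NOISE_STD, and the use of an MSE regression toward the best candidate are assumptions for illustration, not the authors' exact formulation.

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 2   # hypothetical problem dimensions
N_SAMPLES, NOISE_STD = 8, 0.3  # assumed sampling hyperparameters

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)

def spg_actor_update(state):
    # One SPG actor step on a batch of states, shape [B, STATE_DIM].
    with torch.no_grad():
        base = actor(state)                                  # [B, A]
        # Candidate actions: the actor's own output plus noisy samples.
        noise = NOISE_STD * torch.randn(N_SAMPLES, *base.shape)
        candidates = (base.unsqueeze(0) + noise).clamp(-1.0, 1.0)
        candidates = torch.cat([base.unsqueeze(0), candidates], dim=0)
        # Score every candidate with the critic; keep the best per state.
        states = state.unsqueeze(0).expand(candidates.shape[0], -1, -1)
        q = critic(torch.cat([states, candidates], dim=-1)).squeeze(-1)
        idx = q.argmax(dim=0)[None, :, None].expand(1, -1, ACTION_DIM)
        best = candidates.gather(0, idx)[0]                  # [B, A]
    # Regress the actor toward the critic's preferred action sample.
    loss = nn.functional.mse_loss(actor(state), best)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

For example, spg_actor_update(torch.randn(32, STATE_DIM)) performs a single update on a batch of 32 random states; in a full agent this would alternate with critic updates on replayed transitions. Because the actor is trained toward critic-evaluated samples rather than by differentiating through the critic as in DPG, the search over actions is more global, which is the property the abstract highlights.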